TECHnalysis Research Blog

November 13, 2023
IBM Extends Its Goals for AI and Quantum Computing

By Bob O'Donnell

While no one doubts the heritage of technological advancements that IBM has made over the last several decades, there are certainly some who've wondered recently whether the company can sustain those types of efforts into the future. At a recent analyst day at their historic Thomas J. Watson Research Center, IBM made a convincing argument that they are up to the task, especially in the fields of AI—generative AI, in particular—as well as quantum computing.

What was particularly notable was how much tighter a connection the company showed between the work its research group is doing on advanced technologies and the rapid “productization” of that work by its commercial product organizations. In both prepared remarks and in response to questions, it was clear that there's a renewed focus on ensuring that the two groups are in lockstep with regard to their future outlook and development priorities.

As with many other organizations, that hasn’t always been the case with IBM. The result has been that some potentially interesting research efforts haven’t always made it to the market. Thanks to a very clear directive from CEO Arvind Krishna (who used to run IBM Research) about the company’s need to focus on a few specific areas—hybrid cloud, AI and quantum—current research director Dario Gil said that the coordination between research and commercial products groups has never been stronger. The net result should be—and is starting to show—important new capabilities that are making it into commercial products at a much faster pace.

One real-world impact of this new strategic initiative is the company’s very rapid development of its suite of AI tools called watsonx. First unveiled at the company’s Think conference earlier this year (see “IBM Unleashes Generative AI Strategy With watsonx” for more), watsonx continues to evolve, driven in large part by new capabilities first developed by the IBM research group. What was particularly impressive at the recent analyst event was the number of real-world applications and customer examples using watsonx that IBM was able to talk about. While admitting that many organizations are still in the exploratory and proof-of-concept phase when it comes to GenAI, the company still shared a solid set of customer logos from real-world implementations. In addition, IBM had an impressively thorough taxonomy of the applications for which companies are starting to use watsonx and GenAI.

On the application front, IBM noted that the top uses it’s starting to see companies leverage GenAI for fall into three main categories: Digital Labor or HR-related activities, Customer Care or customer support, and App Modernization or code creation. Within those categories, the company discussed content creation, summarization, classification, and coding applications. Given the long history of legacy software that runs on IBM mainframes, IBM noted particular interest from companies that want to move from old COBOL code to modern programming languages with the help of GenAI-powered tools.

In addition to applications, IBM talked about a number of technologies it’s working on within its research group to improve its watsonx offerings. Specifically, IBM discussed its efforts in Performance and Scale, Model Customization, Governance, and Application Enablement. For Performance, IBM said that it’s working on a variety of new ways to improve the efficiency with which large foundation models run. It’s doing that through various combinations of technologies that shrink model size via quantization, improve the ability to share limited compute resources with GPU fractioning, and more.
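To make the quantization point concrete, here is a minimal sketch of post-training dynamic quantization in PyTorch, which converts a model's weights from 32-bit floats to 8-bit integers. It is a generic illustration of the technique, not IBM's watsonx tooling, and the tiny two-layer model is a placeholder standing in for a real foundation model.

```python
# Minimal sketch of post-training dynamic quantization in PyTorch.
# Illustrative only; the small model below is a stand-in for a large
# foundation model with billions of parameters.
import os
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4096, 4096),
    nn.ReLU(),
    nn.Linear(4096, 4096),
)

# Convert the Linear layers' weights from fp32 to int8; activations are
# quantized dynamically at inference time.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Rough on-disk size of a model's weights, in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    mb = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return mb

print(f"fp32 model: {size_mb(model):.1f} MB")
print(f"int8 model: {size_mb(quantized):.1f} MB")  # roughly 4x smaller
```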

Given its open-source focus, IBM also provided more details on all the work it’s doing with the AI application framework PyTorch, which Meta made open source back in 2017. By leveraging the open-source community as well as its own efforts, the company talked about how it’s making significant improvements in both optimizing model performance and opening up the possibility of running PyTorch-built models across a wide range of different computing chip architectures from multiple vendors. Adding a hardware abstraction layer like PyTorch opens up the potential for a much wider range of programmers to build or customize GenAI models. The reason is that models can be created with these tools using languages such as Python that are much more widely known than the chip-specific tools and their lower-level language requirements. At the same time, these hardware abstraction layers often end up adding fairly significant performance penalties because of their high-level nature (an issue that Nvidia’s CUDA software tools don’t suffer from). With the new PyTorch 2.0, however, IBM said that it and others are making concerted efforts to reduce that impact by better organizing where various types of optimization layers need to be and, as a result, are getting closer to “on the metal” performance.
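As a rough illustration of what that looks like in practice, here is a minimal sketch using PyTorch 2.0's torch.compile, the compilation step behind that "closer to the metal" claim. The toy model is a placeholder rather than a real foundation model, and this is a generic example, not IBM's specific optimization work.

```python
# Minimal sketch of PyTorch 2.0's torch.compile. The compiler traces the
# Python-level model and hands it to a backend (Inductor by default) that
# emits fused, hardware-specific kernels, recovering much of the overhead
# introduced by the high-level abstraction.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 1024), nn.GELU(), nn.Linear(1024, 1024))

compiled_model = torch.compile(model)  # same model, compiled execution path

x = torch.randn(32, 1024)
with torch.no_grad():
    eager_out = model(x)              # standard eager execution
    compiled_out = compiled_model(x)  # compiled execution, same results

print(torch.allclose(eager_out, compiled_out, atol=1e-5))
```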

On the Model Customization front, it’s clear IBM is spending a great deal of effort because it has recognized that very few companies are actually building their own models—most are simply customizing or fine-tuning existing ones. (To read more about that development and some of its potential industry implications, check out my recent column “The Rapidly Evolving State of Generative AI”.) To that end, they discussed foundation model tuning techniques such as LoRA (Low-Rank Adaptation), parameter-efficient tuning, multi-task prompt tuning, and more, all of which are expected to be commercialized within watsonx in the not-too-distant future. They also described the need to provide educational guidance in the model-building process to help developers pick the right size model and data sets for a given task. While this may sound simplistic, it’s an absolutely essential requirement, as even basic knowledge about how GenAI models are built and function is much more limited than people realize (or are willing to admit!).
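To give a sense of why LoRA is so parameter-efficient, here is a minimal, generic sketch of a low-rank adapter wrapped around a frozen linear layer. It illustrates the technique in the abstract and is not IBM's watsonx implementation.

```python
# Minimal sketch of LoRA (Low-Rank Adaptation): the pretrained weight
# matrix stays frozen and only two small low-rank matrices are trained.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Trainable low-rank update: delta_W = B @ A, far fewer parameters.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Usage: wrap one projection of a (hypothetical) pretrained model.
layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"training {trainable:,} of {total:,} parameters")  # a tiny fraction
```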

IBM’s efforts on Governance—that is, the tracking and reporting of details around how a model is built and evolved, the data used to create it, etc.—look to be an extremely important and key differentiating capability for the company. This is particularly true in regulated industries and environments where the company has a large customer base. While more details on IBM’s specific governance capabilities are expected shortly, they did share some of the work they’re doing on providing guardrails to prevent the inclusion of biases, social stigmas, obscene content, and personally identifiable information (PII) in datasets intended for model ingestion. In addition, they talked about some of the work they’ve done on risk assessment and prevention. IBM recently announced that it will offer indemnification for customers who use its foundation models, protecting them from IP-related lawsuits. Together, the governance work and the indemnification clearly demonstrate that IBM is in a market-leading position when it comes to the critical concerns that some companies have about the trust and reliability of GenAI technology in general.
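IBM hasn't published the internals of those guardrails, but the basic idea of screening data before model ingestion can be sketched as a simple filtering pass. The patterns below are deliberately simplistic placeholders; a production system, IBM's included, would rely on far more sophisticated detectors.

```python
# Simplified sketch of a data-ingestion guardrail that screens documents
# for personally identifiable information (PII) before they enter a
# training corpus. The regexes are illustrative placeholders only.
import re

PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def screen_document(text: str) -> tuple[bool, list[str]]:
    """Return (allowed, reasons); reject a document if any PII pattern matches."""
    reasons = [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]
    return (len(reasons) == 0, reasons)

docs = [
    "Quarterly revenue grew 12% year over year.",
    "Contact Jane at jane.doe@example.com or 555-867-5309.",
]
for doc in docs:
    ok, reasons = screen_document(doc)
    print("KEEP" if ok else f"DROP ({', '.join(reasons)})", "-", doc)
```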

In the area of Application Enablement, IBM talked a great deal about the work it’s doing around Retrieval Augmented Generation (RAG). RAG is a relatively new technique that grounds the inferencing process in retrieved data, makes it significantly easier and more cost-efficient for companies to leverage their own data, and reduces the need to fine-tune existing foundation models so that organizations don’t have to worry about creating models of their own. IBM says it has already seen a number of its customers start to experiment with and/or adopt RAG techniques, so it’s working on refining its capabilities there to make the creation of more useful GenAI applications much easier for its customers.
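For readers who want a sense of the mechanics, here is a minimal, generic sketch of a RAG flow: index a company's own documents, retrieve the most relevant ones for a question, and hand them to a foundation model as context. The embed() and generate() functions are hypothetical stand-ins for a real embedding model and a real foundation model API, not IBM's watsonx services.

```python
# Minimal, generic sketch of Retrieval Augmented Generation (RAG).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Stand-in embedding; a real system would call an embedding model."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(128)

def generate(prompt: str) -> str:
    """Stand-in for a call to a foundation model API."""
    return f"[model answer based on a prompt of {len(prompt)} characters]"

# 1. Index the company's own documents as embedding vectors.
documents = [
    "Our refund policy allows returns within 30 days of purchase.",
    "Support hours are 9am-5pm Eastern, Monday through Friday.",
    "Enterprise contracts renew annually unless cancelled in writing.",
]
doc_vectors = np.stack([embed(d) for d in documents])

# 2. At question time, retrieve the closest documents by cosine similarity.
question = "How long do customers have to return a product?"
q = embed(question)
scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
top = [documents[i] for i in scores.argsort()[::-1][:2]]

# 3. Ground the model's answer in the retrieved text rather than fine-tuning.
prompt = "Answer using only this context:\n" + "\n".join(top) + f"\n\nQuestion: {question}"
print(generate(prompt))
```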

In the world of quantum computing, IBM is already seen as a leader, in large part because of the amount of time it has spent both working on the technology and discussing the innovations it has made there. What was particularly impressive at the analyst event, however, was that the company showed off a detailed technology road map that extends all the way out to 2030. While some tech companies are willing to share their plans a few years out, it’s virtually unheard of for a company to provide this much information so far in advance. IBM recognizes that it needs to do so in part because quantum computing is such a dramatic and forward-looking technology that many potential customers feel the need to know how they can plan for it. To put it simply, customers want to understand what’s coming before they bet on the roadmap.

Full details of the specific IBM quantum computing developments will be unveiled at an event that the company will be hosting in early December. Suffice it to say, however, that the company continues to be at the cutting-edge of this technology and is growing increasingly confident about its ability to eventually make it into mainstream enterprise computing.

Given the long, sad history of early technology companies that no longer exist, it’s certainly understandable why some harbor doubts about the 112-year-old IBM’s ability to continue innovating. As it recently demonstrated, however, not only is that spirit of invention still alive, it looks to be gaining some serious steam.

Here’s a link to the original column: https://www.linkedin.com/pulse/ibm-extends-its-goals-ai-quantum-computing-bob-o-donnell-qomnc

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC, a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.